245 research outputs found

    Disturbance Observer-based Robust Control and Its Applications: 35th Anniversary Overview

    The Disturbance Observer has been one of the most widely used robust control tools since it was proposed in 1983. This paper introduces the origins of the Disturbance Observer and presents a survey of the major results on Disturbance Observer-based robust control over the last thirty-five years. Furthermore, it explains the analysis and synthesis techniques of Disturbance Observer-based robust control for linear and nonlinear systems using a unified framework. The last section presents concluding remarks on Disturbance Observer-based robust control and its engineering applications. (Comment: 12 pages, 4 figures)
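    The core idea surveyed here — invert a nominal plant model to recover the lumped input, then low-pass the result with a Q filter to estimate the disturbance — can be sketched in a few lines. The first-order plant, feedback gain, and filter time constant below are invented for illustration and are not taken from the paper:

    ```python
    # Minimal discrete-time disturbance-observer (DOB) sketch.
    # Nominal first-order plant: x[k+1] = x[k] + dt * (-a*x[k] + b*(u[k] + d[k]))
    a, b, dt = 2.0, 1.0, 1e-3
    tau_q = 0.01                       # Q-filter (first-order low-pass) time constant

    def simulate(steps=5000, d_true=0.5):
        x, d_hat, q_state = 0.0, 0.0, 0.0
        for _ in range(steps):
            u_cmd = -3.0 * x           # simple stabilizing feedback (assumed gain)
            u = u_cmd - d_hat          # cancel the estimated disturbance
            x_next = x + dt * (-a * x + b * (u + d_true))
            # Inverse nominal model: recover the lumped input b*(u + d)
            lumped = (x_next - x) / dt + a * x
            d_raw = lumped / b - u     # raw disturbance estimate
            # Q filter suppresses noise and model mismatch in the raw estimate
            q_state += dt / tau_q * (d_raw - q_state)
            d_hat = q_state
            x = x_next
        return x, d_hat
    ```

    With a constant disturbance of 0.5, the estimate converges to the true value and the compensated state is driven to zero, which is the behavior the DOB framework is designed to achieve.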

    Mechatronics versus Robotics

    Following Bolton, mechatronics is defined as the integration of electronics, control engineering, and mechanical engineering, thus recognizing the fundamental role of control in joining electronics and mechanics. A robot is commonly considered a typical mechatronic system, which integrates software, control, electronics, and mechanical design in a synergistic manner. Robotics can be considered a part of mechatronics; i.e., all robots are mechatronic systems, but not all mechatronic systems are robots. Advanced robots usually plan their actions by combining an assigned functional task with knowledge about the environment in which they operate. Using a simplified characterization, advanced robots could be defined as mechatronic devices governed by a smart brain placed at a higher hierarchical level. Actuators are building blocks of any mechatronic system. Such systems, however, have a huge application span, ranging from low-cost consumer applications to high-end, high-precision industrial manufacturing equipment.

    MEMS Gyroscopes for Consumers and Industrial Applications

    Antonello, Riccardo; Oboe, Roberto

    Tag-based Visual Odometry Estimation for Indoor UAVs Localization

    The agility and versatility offered by UAV platforms still encounter obstacles to full exploitation in industrial applications due to their indoor usage limitations. A significant challenge in this sense is finding a reliable and cost-effective way to localize aerial vehicles in a GNSS-denied environment. In this paper, we focus on the vision-based positioning paradigm: high accuracy in UAV position and orientation estimation is achieved by leveraging the potential offered by a dense and size-heterogeneous map of tags. In detail, we propose an efficient visual odometry procedure built on hierarchical tag selection, outlier removal, and multi-tag estimation fusion, to facilitate the visual-inertial reconciliation. Experimental results show the validity of the proposed localization architecture compared to the state of the art.
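    A multi-tag fusion step of the kind described above might be sketched as follows. The median-based outlier rejection, the inverse-squared-distance weighting, and the function name are illustrative assumptions, not the paper's actual procedure:

    ```python
    import numpy as np

    # Hypothetical sketch of multi-tag position fusion with outlier rejection.
    # Each detected tag yields an independent camera-position estimate; estimates
    # far from the per-axis median are discarded, and the rest are fused with
    # inverse-squared-distance weights (nearer tags trusted more).

    def fuse_tag_estimates(positions, tag_distances, outlier_thresh=0.5):
        positions = np.asarray(positions, float)        # (N, 3) per-tag estimates
        tag_distances = np.asarray(tag_distances, float)
        med = np.median(positions, axis=0)
        keep = np.linalg.norm(positions - med, axis=1) < outlier_thresh
        p, d = positions[keep], tag_distances[keep]
        w = 1.0 / d**2                                  # simple reliability heuristic
        return (w[:, None] * p).sum(axis=0) / w.sum()
    ```

    For example, three consistent estimates near (0, 0, 1) m with one grossly wrong detection are fused into a position close to (0, 0, 1), with the outlier rejected before weighting.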

    Implementation and Characterization of Vibrotactile Interfaces

    While a standard approach is more or less established for rendering basic vibratory cues in consumer electronics, the implementation of advanced vibrotactile feedback still requires designers and engineers to solve a number of technical issues. Several off-the-shelf vibration actuators are currently available, with different characteristics and limitations that should be considered in the design process. We suggest an iterative approach to design in which vibrotactile interfaces are validated by testing their accuracy in rendering vibratory cues and in measuring input gestures. Several examples of prototype interfaces yielding audio-haptic feedback are described, ranging from open-ended devices to musical interfaces, addressing their design and the characterization of their vibratory output.

    A Multi-Instrument, Force-Feedback Keyboard

    When playing a musical instrument, a player perceives not only the sound generated, but also the haptic interaction arising during the contact between player and instrument. Such haptic interaction, based on the sense of touch, involves several senses in the player: tactile, kinesthetic (i.e., mediated by end organs located in muscles, tendons, and joints and stimulated by bodily movements and tensions), proprioceptive (i.e., relating to stimuli arising within the organism), etc. By its nature, the haptic interaction is bidirectional, and this is exploited by musical instrument players, who can better correlate their actions on the instrument to the sound generated. For instance, by paying attention to the interaction force between key and finger arising during the descent of the key, pianists can detect the re-triggering of the escapement mechanism and, in turn, can adjust the key motion to obtain the fastest repetition of the note. Roughly speaking, haptic information allows the player to perceive the “state” of the mechanism being manipulated through the key. By using this knowledge about the state of the mechanism and correlating it with the sound generated, the player learns a strategy to obtain desired tones. This tight correspondence between acoustic response and touch response, however, is lost in many electronic instruments (e.g., in standard commercial synthesizers), in which sound generation is related only to the key attack velocity and pressure. In this type of synthetic instrument, the touch feedback is independent of the instrument being simulated. For instance, the interaction with different instruments like harpsichord, piano, or pipe organ gives the same haptic information to the player.
    This constitutes a significant limitation for the musician, who loses expressive control of the instrument and, in turn, of the generated sound. This consideration led to several research activities aimed at the realization of an active keyboard, in which actuators connected to the keys are driven in such a way that the haptic interaction experienced is the same as if the player were interacting with the keyboard of the real instrument being emulated by the synthesizer (Baker 1988; Cadoz, Lisowski, and Florens 1990; Gillespie 1992; Gillespie and Cutkosky 1992; Cadoz, Luciani, and Florens 1993; Gillespie 1994). Such haptic displays are usually referred to as “virtual mechanisms,” because they are designed to reproduce the touch feedback that a user would experience when interacting with an actual multi-body mechanism. The approach can be extended to multi-body mechanisms composed of several parts that interact with one another through impacts, constraints, etc. In such a case, the motion of each part of the virtual mechanism must be calculated by a dynamic simulator, which incorporates all the characteristics of the real mechanism and computes the interaction forces among the parts. It is worth noting that, at times, an overly detailed description of the real mechanism leads to a bulky dynamic simulator, not suitable for the real-time implementation required in haptic interaction. Moreover, it is usually difficult to tune the parameters of the dynamic simulator, especially when the mechanism to be simulated contains several nonlinear components, such as nonlinear dampers or constraints. Among all the possible keyboard-operated instruments, the grand piano has by far the most complicated mechanism (Topper and Wills 1987). The grand piano action, in fact, is composed of dozens of components and this, as mentioned, has impeded the realization of a real-time dynamic simulator for it.
    A remarkable work by Gillespie and Cutkosky (1992) shows how it is possible to implement a very detailed model of the piano action and tune it by matching simulation and experimental results, the latter obtained by accurately measuring all dynamic and kinematic variables on an actual piano mechanism. The obtained model, however, even if in good agreement with experimental data, can run only offline. Given these considerations, several researchers have focused their work on reproducing only one or a few specific behaviors of the mechanism. For instance, Baker (1988) proposes the simulation of user-programmable inertial and viscous characteristics to adapt the keyboard to the player’s taste. Gillespie (1992, 1994), on the other hand, has studied the modeling of a simplified piano action composed of only two bodies: the key and the hammer. Even with this very simple model, it is possible to reproduce part of the hammer motion, composed of three phases: contact with the key, free flight, and return onto the key. This model, however, does not take into account the impact of the hammer with the string or the effect of escapement, even though such characteristics are very useful in regaining the previously mentioned correspondence between acoustic response and haptic interaction. This article presents the preliminary results obtained in the MIKEY (Multi-Instrument active KEYboard) project. The project is aimed at the realization of a multi-instrument active keyboard with realistic touch feedback. In particular, the instruments to be emulated are the grand piano, the harpsichord, and the Hammond organ. Given the previous considerations, it is clear that some tradeoff between model accuracy and real-time operability had to be made at the beginning of the project, especially for the grand piano.
    The research presented here started from the work of Gillespie and has been extended with several additional features, namely the hammer-string impact, various state-dependent hammer-key impacts, and the escapement effect. Also, to improve the quality of the haptic feedback, a direct-drive, low-friction motor has been used. Finally, particular attention has been paid to the cost of the overall system, by using inexpensive devices for sensing, actuation, and real-time computation. After introducing the models used in the dynamic simulator, the article describes the experimental setup realized. The experimental results obtained are then reported and compared with those obtained with a standard piano keyboard. Comments on the results presented conclude the article.
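    As a rough illustration of the simplified two-body (key-hammer) action discussed above — not the MIKEY implementation — the ballistic flight phase after escapement can be sketched as follows. The lever ratio, escapement travel, and hammer-string gap are invented placeholder values:

    ```python
    import math

    # Gillespie-style two-body piano action sketch: the hammer is accelerated
    # while in contact with the key, released at the escapement point, and then
    # flies ballistically toward the string. All parameters are illustrative.

    G = 9.81             # gravity, m/s^2
    STRING_GAP = 0.045   # hammer-to-string distance at release (m), assumed

    def hammer_flight_time(key_speed, lever_ratio=5.0):
        """Time from escapement to string impact for a given key speed (m/s)."""
        v0 = lever_ratio * key_speed              # hammer speed at release
        # Solve STRING_GAP = v0*t - 0.5*G*t**2 for the first positive root
        disc = v0**2 - 2.0 * G * STRING_GAP
        if disc < 0:
            return None                           # too slow: hammer falls back, no note
        return (v0 - math.sqrt(disc)) / G
    ```

    Even this toy model captures a qualitative property of the real action: below a minimum key speed the hammer never reaches the string, while faster strokes shorten the flight time and hence tighten the coupling between touch and sound.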

    Web-Interfaced, Force-Reflecting Teleoperation Systems

    An ever-growing number of Internet-connected devices is now accessible to a multitude of users. Being a ubiquitous communication means, the Internet could allow any user to reach and command any device connected to the network. This paper reports the successful application of real-time closed-loop control over the Internet in the Java Based Interface for Telerobotics (JBIT) system, in which Internet users can access and command a two-degrees-of-freedom robot in real time, receiving both visual and force feedback. When the closed-loop control of a remote system comes into play, careful evaluation of the performance and limits of the communication system in use is mandatory. The analysis reported shows that the main limits of the Internet are the unknown available throughput, the variable delay, and the loss of some data packets, in particular when the network is congested. Once the limits of the communication system are known, it is shown that it is possible to use the Internet for the remote closed-loop control of a slave robot, provided that suitable strategies to guarantee operability and safety of the controlled system have been implemented. The strategies implemented in order to overcome the limits posed by the present Internet characteristics are described, along with an improved coordinating force control scheme, which enhances the transparency of the teleoperator.